In an attempt to reduce the lengthy training times of neural networks, we previously proposed Parallel Circuits (PCs), a biologically inspired architecture. Prior work has shown that this approach achieves sharp speed gains but fails to maintain generalization performance. To address this issue, and motivated by the way Dropout prevents node co-adaptation, in this paper we suggest an improvement that extends Dropout to the PC architecture. The paper provides multiple insights into this combination, including a variety of fusion approaches. Experiments show promising results: improved error rates are achieved in most cases, whilst the speed advantage of the PC approach is maintained.
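To make the combination concrete, below is a minimal sketch in PyTorch of one plausible fusion variant: a layer composed of several independent sub-networks ("circuits") that each apply Dropout internally, with their outputs concatenated. All names (ParallelCircuitLayer, circuit_width, n_circuits) and the choice of per-circuit Dropout are illustrative assumptions, not the paper's definitive design.

```python
import torch
import torch.nn as nn

class ParallelCircuitLayer(nn.Module):
    """Hypothetical sketch: several small independent sub-networks
    ("circuits") process the same input in parallel; Dropout is applied
    inside each circuit to discourage co-adaptation of its nodes."""

    def __init__(self, in_features, circuit_width, n_circuits, p_drop=0.5):
        super().__init__()
        self.circuits = nn.ModuleList(
            nn.Sequential(
                nn.Linear(in_features, circuit_width),
                nn.ReLU(),
                nn.Dropout(p_drop),  # per-circuit Dropout (one fusion variant)
            )
            for _ in range(n_circuits)
        )

    def forward(self, x):
        # Each circuit sees the full input; circuit outputs are concatenated.
        return torch.cat([c(x) for c in self.circuits], dim=-1)

# Usage: 4 circuits of width 32 over a 64-dimensional input.
layer = ParallelCircuitLayer(64, 32, 4, p_drop=0.5)
out = layer(torch.randn(8, 64))  # -> shape (8, 128)
```

Other fusion approaches mentioned in the abstract could, for instance, drop entire circuits at once rather than individual nodes; the sketch above shows only the node-level variant.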